Initial implementation #1
Conversation
We must be aware of the custom license of the sxt-proof-of-sql dependency. I think it's fine to release this library under the Apache 2.0 license, because of this part of their license:
If Licensee makes changes or additions to the Software, Licensee
may license those changes or additions under terms of Licensee's
choice, but Licensee cannot change this license for the
Software.
However, users of this library must be aware of these additional restrictions:
Space and Time grants you ("Licensee") a license to use, modify,
and redistribute the Software, but only (a) for Non-Commercial
Use, or (b) for Commercial Use with the Space and Time Platform
only. You must use the Software only as allowed by applicable
law.
and
"Non-Commercial Use" means personal, academic, scientific, or
research and development use, or evaluating the Software, but
does not include uses where the Software facilitates any
transaction of economic value or supports any system that
provides any economic or commercial value other than on the
Space and Time Platform.
I would follow the same approach as theirs and put their license in a third_party/license/sxt-proof-of-sql-LICENSE file in this repo.
if let Some(metadata) = commitment
    .column_commitments()
    .get_metadata(&column.column_id())
{
    if metadata.column_type() != column.column_type() {
        return Err(VerifyError::InvalidInput);
    }
}
Maybe this check is not necessary. In fact, the accessor in VerifiableQueryResult::verify is only required to implement the CommitmentAccessor and MetadataAccessor traits, which do not access the ColumnType of the commitments. Therefore, if the objective of this check is merely to prevent possible panics, it should be redundant.
However, if we decide to keep this check, we should include an else block that returns an InvalidInput error in case the get_metadata(&column.column_id()) invocation returns None.
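A minimal sketch of the suggested restructuring, with the added else branch for the None case. All type names here are simplified stand-ins for illustration, not the actual sxt-proof-of-sql APIs:

```rust
#[derive(Debug, PartialEq)]
enum VerifyError {
    InvalidInput,
}

#[derive(Clone, Copy, PartialEq, Debug)]
enum ColumnType {
    BigInt,
    VarChar,
}

struct Metadata {
    column_type: ColumnType,
}

// Stand-in for commitment.column_commitments().get_metadata(...):
// returns Some(metadata) when the column is present in the commitments.
fn get_metadata(found: bool) -> Option<Metadata> {
    if found {
        Some(Metadata { column_type: ColumnType::BigInt })
    } else {
        None
    }
}

// The check with the suggested `else` branch: a column missing from the
// commitments is now also rejected, instead of being silently accepted.
fn check_column_type(expected: ColumnType, found: bool) -> Result<(), VerifyError> {
    if let Some(metadata) = get_metadata(found) {
        if metadata.column_type != expected {
            return Err(VerifyError::InvalidInput);
        }
    } else {
        return Err(VerifyError::InvalidInput);
    }
    Ok(())
}

fn main() {
    // Matching type and present column: accepted.
    assert_eq!(check_column_type(ColumnType::BigInt, true), Ok(()));
    // Type mismatch: rejected.
    assert_eq!(check_column_type(ColumnType::VarChar, true), Err(VerifyError::InvalidInput));
    // Missing column: rejected by the new else branch.
    assert_eq!(check_column_type(ColumnType::BigInt, false), Err(VerifyError::InvalidInput));
}
```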
if result.table != query_data.table || result.verification_hash != query_data.verification_hash
{
    Err(VerifyError::VerificationFailed)
} else {
    Ok(())
}
Can we double-check whether this is indeed redundant?
From the SxT call it seems that VerifiableQueryResult already contains VerifiableQueryData, which would make this check, and QueryData, redundant. But please double-check.
cc. @lgiussan
// read last usize from the buffer, as max_nu is the last field in the struct, and check if it matches N
// max_nu is not accessible from the VerifierSetup struct, so we need to check it from the buffer
let max_nu = slice_to_usize(&value[value.len() - std::mem::size_of::<usize>()..]);
if max_nu != N {
I assume you are using this const generic to make it easy, once we integrate the pallet, to define benchmarks for different key sizes. However, when you develop the pallet, you will have to assign a value to this N, and I assume you are going to use some kind of MAX_VAL. If that's the case, then this check should be if max_nu > N, otherwise we won't be able to accept keys of smaller size.
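A self-contained sketch of the relaxed comparison (`>` instead of `!=`). The encoding details are assumptions for illustration: little-endian byte order, N = 8, and a reimplemented slice_to_usize, none of which is taken from the actual codebase:

```rust
// Assumed maximum supported max_nu; in the pallet this would be some MAX_VAL.
const N: usize = 8;

// Assumed helper: interprets a usize-sized little-endian slice.
fn slice_to_usize(bytes: &[u8]) -> usize {
    usize::from_le_bytes(bytes.try_into().expect("slice must be usize-sized"))
}

// Build a toy buffer whose last field is max_nu, mimicking the layout
// described in the comment (max_nu serialized at the end of the struct).
fn encode(max_nu: usize) -> Vec<u8> {
    let mut buf = vec![0u8; 8]; // placeholder for the preceding fields
    buf.extend_from_slice(&max_nu.to_le_bytes());
    buf
}

// The relaxed check: accept any key whose max_nu fits within N,
// rather than only keys with max_nu exactly equal to N.
fn check_max_nu(buffer: &[u8]) -> Result<usize, &'static str> {
    let tail = &buffer[buffer.len() - std::mem::size_of::<usize>()..];
    let max_nu = slice_to_usize(tail);
    if max_nu > N {
        return Err("max_nu exceeds supported maximum");
    }
    Ok(max_nu)
}

fn main() {
    // Smaller keys are now accepted.
    assert_eq!(check_max_nu(&encode(4)), Ok(4));
    // The boundary value is still accepted.
    assert_eq!(check_max_nu(&encode(8)), Ok(8));
    // Oversized keys are rejected.
    assert!(check_max_nu(&encode(9)).is_err());
}
```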
As I understand this, N has to be MAX_VAL. One of the questions to SxT is about this parameter.
So you are saying they are going to use the maximum possible key size regardless of the size of the underlying DB and the corresponding queries?
Not the maximum possible, that would be too heavy. I would expect a value that fits 80-90% of the cases. The value directly affects the size of the VK.
Please note that there might be changes in the code, as the S&T libraries are under active development.